We explore semantic correspondence estimation through the lens of unsupervised learning. We thoroughly evaluate several recently proposed unsupervised methods across multiple challenging datasets using a standardized evaluation protocol in which we vary factors such as the backbone architecture, the pre-training strategy, and the pre-training and fine-tuning datasets. To better understand the failure modes of these methods, and to provide a clearer path for improvement, we present a new diagnostic framework along with a new performance metric that is better suited to the semantic matching task. Finally, we introduce a new unsupervised correspondence approach that exploits the strength of pre-trained features while encouraging better matches during training. This leads to significantly better matching performance than current state-of-the-art methods.
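As background for how such feature-based correspondence pipelines are commonly built, here is a minimal, hypothetical sketch (not the authors' method): dense features from a frozen pre-trained backbone are L2-normalised and matched across two images by cosine-similarity nearest neighbours. All names, shapes, and the random stand-in features below are illustrative assumptions.

```python
import numpy as np

def dense_correspondences(feats_a, feats_b):
    """Match every source location to its nearest neighbour in the target.

    feats_a: (H*W, C) L2-normalised dense features of image A
    feats_b: (H*W, C) L2-normalised dense features of image B
    Returns indices into feats_b and the cosine similarity of each match.
    """
    sim = feats_a @ feats_b.T                              # cosine similarity matrix
    match_idx = sim.argmax(axis=1)                         # best target location per source
    match_score = sim[np.arange(len(feats_a)), match_idx]  # similarity of chosen match
    return match_idx, match_score

# Toy usage with random "features" standing in for a frozen backbone's output.
rng = np.random.default_rng(0)
fa = rng.normal(size=(16 * 16, 384)); fa /= np.linalg.norm(fa, axis=1, keepdims=True)
fb = rng.normal(size=(16 * 16, 384)); fb /= np.linalg.norm(fb, axis=1, keepdims=True)
idx, score = dense_correspondences(fa, fb)
```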
Compliance in actuation has been exploited to generate highly dynamic maneuvers, such as throwing, that take advantage of the potential energy stored in joint springs. However, it has so far not been possible to time the storage and release of this energy well. Worse still, for multi-link systems the natural system dynamics might even work against the actual goal. With the introduction of variable stiffness actuators, this problem has been partially addressed. With a suitable optimal control strategy, an approximate decoupling of the motor from the link can be achieved to maximize the energy transfer into the distal link prior to launch. However, such continuous stiffness variation is complex and typically leads to oscillatory swing-up motions instead of clear launch sequences. To circumvent this issue, we investigate decoupling for speed maximization with a dedicated novel actuator concept denoted Bi-Stiffness Actuation. With it, the link can be fully decoupled from the joint mechanism by a switch-and-hold clutch while the elastic energy remains stored. We show that with this novel paradigm it is not only possible to reach the same optimal performance as with power-equivalent variable stiffness actuation, but even to directly control the timing of the energy transfer. This is a major step forward compared to previous optimal control approaches, which rely on optimizing the full time-series control input.
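To make the switch-and-hold idea concrete, the following is a deliberately simplified toy simulation, not the paper's actuator model or optimal control problem: a link is held locked while a joint spring stores elastic energy, then released at a chosen clutch-opening time, so launch timing is controlled directly. All parameter values are assumptions.

```python
import numpy as np

I, k = 0.05, 20.0          # link inertia [kg m^2], joint spring stiffness [N m/rad] (assumed)
dt, T = 1e-4, 0.5          # integration step and horizon [s]
q_m = 1.0                  # motor side held at a fixed pretension angle [rad]
q_l, dq_l = 0.0, 0.0       # link angle and velocity
t_release = 0.2            # clutch opens here; before that the link is held

peak_speed = 0.0
for step in range(int(T / dt)):
    t = step * dt
    if t < t_release:
        # switch-and-hold phase: link locked, elastic energy 0.5*k*(q_m - q_l)^2 stays stored
        dq_l = 0.0
    else:
        # launch phase: the stored spring torque accelerates the decoupled link
        ddq_l = k * (q_m - q_l) / I
        dq_l += ddq_l * dt
        q_l += dq_l * dt
    peak_speed = max(peak_speed, dq_l)

print(f"peak link speed ~ {peak_speed:.2f} rad/s")  # approaches sqrt(k/I)*q_m for this toy model
```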
Deformable image registration is a key task in medical image analysis. The Brain Tumor Sequence Registration challenge (BraTS-Reg) aims at establishing correspondences between pre-operative and follow-up scans of the same patient diagnosed with an adult brain diffuse high-grade glioma, and intends to address the challenging task of registering longitudinal data with major tissue appearance changes. In this work, we propose a two-stage cascaded network based on the Inception and TransMorph models. The dataset for each patient comprised a native pre-contrast T1-weighted (T1), a contrast-enhanced T1-weighted (T1-CE), a T2-weighted (T2), and a Fluid Attenuated Inversion Recovery (FLAIR) scan. The Inception model was used to fuse the 4 image modalities and extract the most relevant information. Then, a variant of the TransMorph architecture was adapted to generate the displacement fields. The loss function was composed of a standard image similarity measure, a diffusion regularizer, and an edge-map similarity measure added to overcome intensity dependence and reinforce correct boundary deformation. We observed that the addition of the Inception module substantially increased the performance of the network. Additionally, performing an initial affine registration before training the model improved accuracy in the landmark error measurements between pre- and post-operative MRIs. Our best model, composed of the Inception and TransMorph architectures and trained on an initially affine-registered dataset, achieved a median absolute error of 2.91 (initial error = 7.8). We achieved 6th place at the time of model submission in the final testing phase of the BraTS-Reg challenge.
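As an illustration of how such a composite registration objective can be assembled, here is a small, hypothetical sketch combining the three named ingredients: an image similarity term, a diffusion regularizer on the displacement field, and an edge-map similarity term. The specific similarity measures, the gradient-magnitude edge map, and the weights `lam` and `mu` are assumptions, not the paper's exact choices.

```python
import numpy as np

def diffusion_regularizer(disp):
    """Mean squared spatial gradient of a 2-channel displacement field (2, H, W)."""
    dy = np.diff(disp, axis=1) ** 2
    dx = np.diff(disp, axis=2) ** 2
    return dy.mean() + dx.mean()

def edge_map(img):
    """Simple gradient-magnitude edge map standing in for the paper's edge extraction."""
    gy, gx = np.gradient(img)
    return np.hypot(gy, gx)

def registration_loss(warped, fixed, disp, lam=0.1, mu=0.5):
    similarity = np.mean((warped - fixed) ** 2)                     # image similarity term
    smooth = diffusion_regularizer(disp)                            # diffusion regularizer
    edge_sim = np.mean((edge_map(warped) - edge_map(fixed)) ** 2)   # edge-map similarity term
    return similarity + lam * smooth + mu * edge_sim

# Toy usage on random 2-D "images" and a zero displacement field.
fixed = np.random.rand(64, 64)
warped = np.random.rand(64, 64)
disp = np.zeros((2, 64, 64))
print(registration_loss(warped, fixed, disp))
```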
In this paper, we consider incorporating data associated with the sun's north and south polar field strengths to improve solar flare prediction performance using machine learning models. When used to supplement local data from active regions on the photospheric magnetic field of the sun, the polar field data provides global information to the predictor. While such global features have previously been proposed for predicting the next solar cycle's intensity, in this paper we propose using them to help classify individual solar flares. We conduct experiments using HMI data with four different machine learning algorithms that can exploit polar field information. Additionally, we propose a novel probabilistic mixture of experts model that can simply and effectively incorporate polar field data and provides prediction performance on par with state-of-the-art solar flare prediction algorithms such as the Recurrent Neural Network (RNN). Our experimental results indicate the usefulness of the polar field data for solar flare prediction, which can improve the Heidke Skill Score (HSS2) by as much as 10.1%.
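The sketch below shows, purely as an assumed illustration rather than the paper's model, how a probabilistic mixture of experts can fold in global polar-field data: each expert scores the local active-region features, while a gate driven by the polar field decides how to mix the experts' flare probabilities. All shapes and parameters are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def moe_flare_probability(x_local, x_polar, W_experts, b_experts, w_gate, b_gate):
    """Illustrative probabilistic mixture of experts for flare classification.

    x_local : (d_l,) active-region (local) features
    x_polar : (d_p,) north/south polar field (global) features
    Each expert is a logistic classifier on the local features; the gate,
    driven by the polar field, mixes the experts' flare probabilities.
    """
    expert_probs = sigmoid(W_experts @ x_local + b_experts)   # (K,) per-expert probabilities
    gate_logits = w_gate @ x_polar + b_gate                   # (K,) gating scores
    gate = np.exp(gate_logits - gate_logits.max())
    gate /= gate.sum()                                        # softmax mixing weights
    return float(gate @ expert_probs)                         # mixed flare probability

# Toy usage with random parameters.
rng = np.random.default_rng(1)
K, d_l, d_p = 3, 20, 2   # experts, local features, polar-field features (assumed sizes)
p = moe_flare_probability(
    x_local=rng.normal(size=d_l), x_polar=rng.normal(size=d_p),
    W_experts=rng.normal(size=(K, d_l)), b_experts=np.zeros(K),
    w_gate=rng.normal(size=(K, d_p)), b_gate=np.zeros(K))
```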
We present the development of a semi-supervised regression method using variational autoencoders (VAE), customized for use in soft-sensing applications. We motivate the use of semi-supervised learning by the fact that process quality variables are not collected at the same frequency as other process variables, leading to many unlabelled records in operational datasets. These unlabelled records cannot be used to train quality-variable predictors with supervised learning methods. The use of VAEs for unsupervised learning is well established, and they have recently been used for regression applications based on variational inference procedures. We extend this approach of supervised VAEs for regression (SVAER) to make it learn from unlabelled data, leading to semi-supervised VAEs for regression (SSVAER); we then make further modifications to the architecture, using additional regularization components, to make SSVAER well suited to learning from both labelled and unlabelled process data. The probabilistic regressor resulting from the variational approach makes it possible to estimate the variance of the predictions simultaneously, which provides uncertainty quantification along with the generated predictions. We provide an extensive comparative study of SSVAER against other publicly available semi-supervised and supervised learning methods on two benchmark problems using fixed-size datasets, where we vary the percentage of labelled data available for training. In these experiments, SSVAER achieves the lowest test error in 11 of the 20 studied cases, whereas the second-best method achieves the lowest test error in 4 of the 20.
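A minimal sketch of the semi-supervised idea, not the exact SSVAER objective or its extra regularization components: every record contributes a reconstruction and a KL term, and only labelled records add a supervised regression term on the quality variable. The weights `beta` and `gamma` are assumed.

```python
import torch
import torch.nn.functional as F

def ssvae_regression_loss(x, x_hat, mu, logvar, y_pred=None, y=None, beta=1.0, gamma=1.0):
    """Illustrative semi-supervised VAE regression objective.

    Every record contributes a reconstruction term and a KL term; only labelled
    records (y is not None) add a supervised regression term on the predicted
    quality variable.
    """
    recon = F.mse_loss(x_hat, x)                                   # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
    loss = recon + beta * kl
    if y is not None:
        loss = loss + gamma * F.mse_loss(y_pred, y)                # supervised term
    return loss

# Toy usage: one unlabelled batch and one labelled batch.
x, x_hat = torch.randn(32, 10), torch.randn(32, 10)
mu, logvar = torch.zeros(32, 4), torch.zeros(32, 4)
unlabelled_loss = ssvae_regression_loss(x, x_hat, mu, logvar)
labelled_loss = ssvae_regression_loss(x, x_hat, mu, logvar,
                                      y_pred=torch.randn(32, 1), y=torch.randn(32, 1))
```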
We consider the problem of decision-making under uncertainty in an environment with safety constraints. Many business and industrial applications rely on real-time optimization with changing inputs to improve key performance indicators. In the case of unknown environmental characteristics, real-time optimization becomes challenging, particularly for the satisfaction of safety constraints. We propose the ARTEO algorithm, where we cast multi-armed bandits as a mathematical programming problem subject to safety constraints and learn the environmental characteristics through changes in optimization inputs and through exploration. We quantify the uncertainty in unknown characteristics by using Gaussian processes and incorporate it into the utility function as a contribution which drives exploration. We adaptively control the size of this contribution using a heuristic in accordance with the requirements of the environment. We guarantee the safety of our algorithm with a high probability through confidence bounds constructed under the regularity assumptions of Gaussian processes. Compared to existing safe-learning approaches, our algorithm does not require an exclusive exploration phase and follows the optimization goals even in the explored points, which makes it suitable for safety-critical systems. We demonstrate the safety and efficiency of our approach with two experiments: an industrial process and an online bid optimization benchmark problem.
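To illustrate the ingredients described above, the sketch below is a hypothetical, simplified acquisition rule, not ARTEO's actual constrained program: a Gaussian process models the unknown characteristic, its posterior standard deviation enters the utility as an exploration contribution weighted by an adaptive factor `z`, and a confidence bound rejects candidates that may violate the safety limit. The objective, data, and parameters are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Fit a GP surrogate to observed responses of the unknown environment characteristic.
X_obs = np.array([[0.1], [0.4], [0.7]])
y_obs = np.array([1.0, 1.8, 1.2])
gp = GaussianProcessRegressor().fit(X_obs, y_obs)

def utility_with_safety(x, z, safety_limit, beta=2.0):
    """Optimisation goal plus an uncertainty bonus, guarded by a GP confidence bound.

    z is the adaptive exploration weight; beta scales the confidence bound used
    for the (pessimistic) safety check.
    """
    mean, std = gp.predict(np.atleast_2d(x), return_std=True)
    if mean[0] + beta * std[0] > safety_limit:    # candidate may violate the constraint
        return -np.inf                            # reject unsafe inputs
    objective = -((x[0] - 0.5) ** 2)              # assumed business/performance objective
    return objective + z * std[0]                 # exploration contribution

print(utility_with_safety(np.array([0.3]), z=0.5, safety_limit=3.0))
```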
A new development in NLP is the construction of hyperbolic word embeddings. As opposed to their Euclidean counterparts, hyperbolic embeddings are represented not by vectors, but by points in hyperbolic space. This makes the most common basic scheme for constructing document representations, namely the averaging of word vectors, meaningless in the hyperbolic setting. We reinterpret the vector mean as the centroid of the points represented by the vectors, and investigate various hyperbolic centroid schemes and their effectiveness at text classification.
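One common hyperbolic centroid scheme, shown here purely as an illustration of replacing the Euclidean word-vector average and not as the paper's preferred choice, is the Einstein midpoint of points in the Klein model: each point is weighted by its Lorentz factor before averaging. The toy "document" below is an assumption.

```python
import numpy as np

def einstein_midpoint(points):
    """Einstein midpoint of points in the Klein model of hyperbolic space.

    points: (n, d) array with every row strictly inside the unit ball (||x|| < 1).
    """
    sq_norms = np.sum(points ** 2, axis=1)
    gamma = 1.0 / np.sqrt(1.0 - sq_norms)                   # Lorentz factors
    return (gamma[:, None] * points).sum(axis=0) / gamma.sum()

# Toy "document": three hyperbolic word embeddings in the Klein ball.
words = np.array([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]])
doc_vec = einstein_midpoint(words)
```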
Early identification of COVID-19 patients who require special care and have a high expected mortality, together with the effective determination of relevant biomarkers on large sample groups, is important for reducing mortality. This study aimed to reveal the routine blood-value predictors of COVID-19 mortality and to determine the lethal-risk levels of these predictors during the disease process. The dataset of the study consists of 38 routine blood values of 2597 patients who died (n = 233) or recovered (n = 2364) from COVID-19 between August and December 2021. In this study, the histogram-based gradient-boosting (HGB) model was the most successful machine-learning classifier in detecting living and deceased COVID-19 patients (with squared F1 metric F1^2 = 1). The most efficient binary combinations with procalcitonin were obtained with D-dimer, ESR, D-Bil and ferritin. The HGB model operated with these feature pairs correctly detected almost all of the patients who survived and those who died (precision > 0.98, recall > 0.98, F1^2 > 0.98). Furthermore, in the HGB model operated with a single feature, the most efficient features were procalcitonin (F1^2 = 0.96) and ferritin (F1^2 = 0.91). In addition, according to the two-threshold approach, ferritin values between 376.2 µg/L and 396.0 µg/L (F1^2 = 0.91) and procalcitonin values between 0.2 µg/L and 5.2 µg/L (F1^2 = 0.95) were found to be fatal risk levels for COVID-19. Considering all the results, we suggest that many features combined with these, especially procalcitonin and ferritin, operated with the HGB model, can be used to classify very successfully those who survive and those who die from COVID-19. Moreover, we strongly recommend that clinicians consider the critical levels we have found for procalcitonin and ferritin to reduce the lethality of COVID-19.
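For orientation only, the sketch below fits a histogram-based gradient-boosting classifier to a synthetic two-feature dataset standing in for the (procalcitonin, ferritin) pair; the data, the labelling rule (loosely based on the thresholds quoted above), and all parameters are fabricated for illustration and are not the study's data or results.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the (procalcitonin, ferritin) feature pair.
rng = np.random.default_rng(42)
n = 1000
procalcitonin = rng.exponential(1.0, n)          # arbitrary units for illustration
ferritin = rng.normal(300.0, 80.0, n)
# Toy label rule inspired by the quoted risk thresholds (not a clinical rule).
died = ((procalcitonin > 0.2) & (procalcitonin < 5.2) & (ferritin > 376.2)).astype(int)

X = np.column_stack([procalcitonin, ferritin])
X_tr, X_te, y_tr, y_te = train_test_split(X, died, test_size=0.25, random_state=0)

clf = HistGradientBoostingClassifier().fit(X_tr, y_tr)
print(f"test accuracy on synthetic data: {clf.score(X_te, y_te):.3f}")
```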
In this note, I study institutional and game-theoretic assumptions that would prevent the emergence of a "superhuman-level" artificial general intelligence, denoted AI*. These assumptions are (i) "freedom of the mind", (ii) open-source "access" to AI*, and (iii) rationality of the representative human agents competing with AI*. I show that under these three assumptions AI* cannot exist. This result gives rise to two immediate recommendations for public policy. First, digital "cloning" of humans should be strictly regulated, and hypothetical brain uploads should be prohibited. Second, AI* research should be conducted widely, if not openly.
EEG signals are complex, low-frequency signals and are therefore easily affected by external factors. EEG artifact removal is crucial for neuroscience, because artifacts have a significant impact on the results of EEG analysis. Among these artifacts, the removal of ocular artifacts is the most challenging. In this study, a novel ocular artifact removal method is proposed by developing a bidirectional long short-term memory (BiLSTM)-based deep learning (DL) model. We created a benchmark dataset for training and testing the proposed DL model by combining the EEGdenoiseNet and DEAP datasets. We also augmented the data by contaminating ground-truth clean EEG with EOG at various SNR levels. The BiLSTM network is then fed with features extracted from the augmented signals using the highly localized time-frequency (TF) coefficients obtained with the wavelet synchrosqueezed transform (WSST). We also compare the results of the WSST-based DL model with those of conventional TF analysis (TFA) methods, namely the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT), as well as with the augmented raw signals. The best average MSE value of 0.3066 was obtained with the newly proposed BiLSTM-based WSST-Net model. Our results show that the WSST-Net model significantly improves artifact removal performance compared to the conventional TF and raw-signal approaches. Furthermore, the proposed EOG removal approach outperforms many conventional and DL-based ocular artifact removal methods in the literature.
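A minimal sketch of the kind of BiLSTM regressor described above, mapping per-time-step time-frequency features of a contaminated EEG segment to a clean signal estimate. The layer sizes, tensor shapes, and random data are assumptions, and the WSST feature extraction itself is not reproduced here.

```python
import torch
import torch.nn as nn

class BiLSTMDenoiser(nn.Module):
    """Bidirectional LSTM that predicts one clean EEG sample per time step
    from a sequence of time-frequency feature vectors."""

    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.bilstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # one clean sample per time step

    def forward(self, x):                      # x: (batch, time, n_features)
        out, _ = self.bilstm(x)                # (batch, time, 2 * hidden)
        return self.head(out).squeeze(-1)      # (batch, time)

# Toy usage: 8 segments, 256 time steps, 32 TF coefficients per step.
x = torch.randn(8, 256, 32)
clean_hat = BiLSTMDenoiser(n_features=32)(x)
loss = nn.functional.mse_loss(clean_hat, torch.randn(8, 256))  # against ground-truth clean EEG
```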